15 research outputs found
Assisted Probe Positioning for Ultrasound Guided Radiotherapy Using Image Sequence Classification
Effective transperineal ultrasound image guidance in prostate external beam
radiotherapy requires consistent alignment between probe and prostate at each
session during patient set-up. Probe placement and ultrasound image
interpretation are manual tasks contingent upon operator skill, leading to
interoperator uncertainties that degrade radiotherapy precision. We demonstrate
a method for ensuring accurate probe placement through joint classification of
images and probe position data. Using a multi-input multi-task algorithm,
spatial coordinate data from an optically tracked ultrasound probe is combined
with an image classifier using a recurrent neural network to generate two sets
of predictions in real-time. The first set identifies relevant prostate anatomy
visible in the field of view using the classes: outside prostate, prostate
periphery, prostate centre. The second set recommends a probe angular
adjustment to achieve alignment between the probe and prostate centre with the
classes: move left, move right, stop. The algorithm was trained and tested on
9,743 clinical images from 61 treatment sessions across 32 patients. We
evaluated classification accuracy against class labels derived from three
experienced observers at 2/3 and 3/3 agreement thresholds. For images with
unanimous consensus between observers, anatomical classification accuracy was
97.2% and probe adjustment accuracy was 94.9%. The algorithm identified optimal
probe alignment within a mean (standard deviation) range of 3.7°
(1.2°) from angle labels with full observer consensus, comparable to
the 2.8° (2.6°) mean interobserver range. We propose such an
algorithm could assist radiotherapy practitioners with limited experience of
ultrasound image interpretation by providing effective real-time feedback
during patient set-up.
Comment: Accepted to MICCAI 202
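The joint two-output classification described above can be sketched in a few lines. Everything here (embedding sizes, the fusion step, the random weights) is an illustrative assumption, not the authors' trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def joint_classify(image_feat, probe_xyz, W_img, W_pos, W_anat, W_move):
    """Fuse image features with tracked probe coordinates, then branch into two heads."""
    fused = np.tanh(image_feat @ W_img + probe_xyz @ W_pos)
    anatomy = softmax(fused @ W_anat)   # outside prostate / periphery / centre
    movement = softmax(fused @ W_move)  # move left / move right / stop
    return anatomy, movement

# Hypothetical sizes: 64-d image embedding, 3-d tracked position, 16-d fusion.
W_img, W_pos = rng.normal(size=(64, 16)), rng.normal(size=(3, 16))
W_anat, W_move = rng.normal(size=(16, 3)), rng.normal(size=(16, 3))

anat, move = joint_classify(rng.normal(size=(1, 64)), rng.normal(size=(1, 3)),
                            W_img, W_pos, W_anat, W_move)
```

In the paper's setting, both heads are trained jointly so a single fused image-plus-position representation serves both tasks.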
Adversarial Deformation Regularization for Training Image Registration Neural Networks
We describe an adversarial learning approach to constrain convolutional
neural network training for image registration, replacing heuristic smoothness
measures of displacement fields often used in these tasks. Using
minimally-invasive prostate cancer intervention as an example application, we
demonstrate the feasibility of utilizing biomechanical simulations to
regularize a weakly-supervised anatomical-label-driven registration network for
aligning pre-procedural magnetic resonance (MR) and 3D intra-procedural
transrectal ultrasound (TRUS) images. A discriminator network is optimized to
distinguish the registration-predicted displacement fields from the motion data
simulated by finite element analysis. During training, the registration network
simultaneously aims to maximize similarity between anatomical labels that
drives image alignment and to minimize an adversarial generator loss that
measures divergence between the predicted- and simulated deformation. The
end-to-end trained network enables efficient and fully-automated registration
that only requires an MR and TRUS image pair as input, without anatomical
labels or simulated data during inference. 108 pairs of labelled MR and TRUS
images from 76 prostate cancer patients and 71,500 nonlinear finite-element
simulations from 143 different patients were used for this study. We show that,
with only gland segmentation as training labels, the proposed method can help
predict physically plausible deformation without any other smoothness penalty.
Based on cross-validation experiments using 834 pairs of independent validation
landmarks, the proposed adversarial-regularized registration achieved a target
registration error of 6.3 mm that is significantly lower than those from
several other regularization methods.
Comment: Accepted to MICCAI 201
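The adversarial regularisation idea can be illustrated with a toy discriminator on displacement fields. All names, shapes and the placeholder losses below are assumptions for illustration, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator(ddf, w):
    # Linear scorer on a flattened displacement field; in the paper this is a
    # trained network, here a stand-in with small random weights.
    return sigmoid(ddf.ravel() @ w)

w = rng.normal(size=3 * 4 * 4 * 4) * 0.01             # toy 4^3 field, 3 components/voxel
simulated = rng.normal(scale=0.5, size=(3, 4, 4, 4))  # stands in for an FEM-simulated field
predicted = rng.normal(scale=2.0, size=(3, 4, 4, 4))  # stands in for a network prediction

# Discriminator loss: binary cross-entropy separating simulated (real)
# from registration-predicted (fake) displacement fields.
d_loss = -np.log(discriminator(simulated, w)) - np.log(1.0 - discriminator(predicted, w))

# Registration-network loss: a label-similarity term (placeholder constant here)
# plus the adversarial generator term that rewards fooling the discriminator,
# taking the place of a heuristic smoothness penalty.
label_loss = 0.2
g_loss = label_loss - np.log(discriminator(predicted, w))
```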
Label-driven weakly-supervised learning for multimodal deformable image registration
Spatially aligning medical images from different modalities remains a
challenging task, especially for intraoperative applications that require fast
and robust algorithms. We propose a weakly-supervised, label-driven formulation
for learning 3D voxel correspondence from higher-level label correspondence,
thereby bypassing classical intensity-based image similarity measures. During
training, a convolutional neural network is optimised by outputting a dense
displacement field (DDF) that warps a set of available anatomical labels from
the moving image to match their corresponding counterparts in the fixed image.
These label pairs, including solid organs, ducts, vessels, point landmarks and
other ad hoc structures, are only required at training time and can be
spatially aligned by minimising a cross-entropy function of the warped moving
label and the fixed label. During inference, the trained network takes a new
image pair to predict an optimal DDF, resulting in a fully-automatic,
label-free, real-time and deformable registration. For interventional
applications where large global transformation prevails, we also propose a
neural network architecture to jointly optimise the global- and local
displacements. Experiment results are presented based on cross-validating
registrations of 111 pairs of T2-weighted magnetic resonance images and 3D
transrectal ultrasound images from prostate cancer patients with a total of
over 4000 anatomical labels, yielding a median target registration error of 4.2
mm on landmark centroids and a median Dice of 0.88 on prostate glands.
Comment: Accepted to ISBI 201
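The label-driven training signal can be sketched with a toy 2D example: a dense displacement field (DDF) warps a moving label, and a cross-entropy against the fixed label drives alignment. The nearest-neighbour warp and the tiny labels below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def warp_label(label, ddf):
    """Nearest-neighbour warp of a 2D binary label by a per-pixel displacement field."""
    h, w = label.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    sy = np.clip(np.rint(ys + ddf[0]), 0, h - 1).astype(int)
    sx = np.clip(np.rint(xs + ddf[1]), 0, w - 1).astype(int)
    return label[sy, sx]

def label_cross_entropy(warped, fixed, eps=1e-7):
    p = np.clip(warped.astype(float), eps, 1.0 - eps)
    return float(-(fixed * np.log(p) + (1 - fixed) * np.log(1.0 - p)).mean())

moving = np.zeros((8, 8), dtype=int)
moving[2:5, 2:5] = 1                  # toy organ label in the moving image
fixed = np.zeros((8, 8), dtype=int)
fixed[3:6, 3:6] = 1                   # same organ, shifted, in the fixed image

identity = np.zeros((2, 8, 8))        # zero displacement everywhere
shift = np.full((2, 8, 8), -1.0)      # constant field that shifts the label down-right

loss_id = label_cross_entropy(warp_label(moving, identity), fixed)
loss_ok = label_cross_entropy(warp_label(moving, shift), fixed)
# A field matching the true motion gives a much lower label loss than identity.
```

At inference, as the abstract notes, only the image pair is needed: the labels exist solely to supply this training loss.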
Automatic segmentation method of pelvic floor levator hiatus in ultrasound using a self-normalising neural network
Segmentation of the levator hiatus in ultrasound allows the extraction of
biometrics, which are of importance for pelvic floor disorder assessment. In this work, we
present a fully automatic method using a convolutional neural network (CNN) to
outline the levator hiatus in a 2D image extracted from a 3D ultrasound volume.
In particular, our method uses a recently developed scaled exponential linear
unit (SELU) as a nonlinear self-normalising activation function, which for the
first time has been applied in medical imaging with CNN. SELU has important
advantages such as being parameter-free and mini-batch independent, which may
help to overcome memory constraints during training. A dataset with 91 images
from 35 patients during Valsalva, contraction and rest, all labelled by three
operators, is used for training and evaluation in a leave-one-patient-out
cross-validation. Results show a median Dice similarity coefficient of 0.90
with an interquartile range of 0.08, with equivalent performance to the three
operators (with a Williams' index of 1.03), and outperforming a U-Net
architecture without the need for batch normalisation. We conclude that the
proposed fully automatic method achieved equivalent accuracy in segmenting the
pelvic floor levator hiatus compared to a previous semi-automatic approach.
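The SELU activation at the centre of this method is simple to state. The constants below are the standard self-normalising values from the SELU literature, and the empirical check is only a rough illustration of the self-normalising property:

```python
import numpy as np

# Constants from the self-normalising networks literature (Klambauer et al.).
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    # Parameter-free activation: positive inputs are scaled linearly, negative
    # inputs follow a scaled exponential; no batch statistics are needed.
    return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

rng = np.random.default_rng(0)
z = selu(rng.standard_normal(100_000))
# For roughly standard-normal inputs, activations stay near zero mean and unit
# variance, which is why the network can do without batch normalisation.
```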
Deep hashing for global registration of untracked 2D laparoscopic ultrasound to CT
PURPOSE: The registration of Laparoscopic Ultrasound (LUS) to CT can enhance the safety of laparoscopic liver surgery by providing the surgeon with awareness of the relative positioning between critical vessels and a tumour. In an effort to provide a translatable solution for this poorly constrained problem, Content-based Image Retrieval (CBIR) based on vessel information has been suggested as a method for obtaining a global coarse registration without using tracking information. However, the performance of these frameworks is limited by the use of non-generalisable handcrafted vessel features. METHODS: We propose the use of a Deep Hashing (DH) network to directly convert vessel images from both LUS and CT into fixed-size hash codes. During training, these codes are learnt from a patient-specific CT scan by supplying the network with triplets of vessel images which include both a registered and a mis-registered pair. Once hash codes have been learnt, they can be used to perform registration with CBIR methods. RESULTS: We test a CBIR pipeline on 11 sequences of untracked LUS distributed across 5 clinical cases. Compared to a handcrafted feature approach, our model improves the registration success rate significantly from 48% to 61%, considering a 20 mm error as the threshold for a successful coarse registration. CONCLUSIONS: We present the first DH framework for interventional multi-modal registration tasks. The presented approach is easily generalisable to other registration problems, does not require annotated data for training, and may promote the translation of these techniques
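Once hashing is learnt, the retrieval step of such a pipeline amounts to nearest-neighbour search in Hamming space. In this sketch, a random projection merely stands in for the trained DH network; all names and sizes are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_code(features, projection):
    """Binarise an embedding into a fixed-size hash code (sign of each projection)."""
    return (features @ projection > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

projection = rng.normal(size=(32, 16))   # stands in for the trained DH network
ct_views = rng.normal(size=(100, 32))    # embeddings of vessel images resliced from CT
lus_query = ct_views[42] + rng.normal(scale=0.05, size=32)  # a near-registered LUS view

ct_codes = hash_code(ct_views, projection)
query_code = hash_code(lus_query, projection)

# Coarse registration = the CT view whose code is closest in Hamming distance.
best = min(range(len(ct_views)), key=lambda i: hamming(ct_codes[i], query_code))
```

Because codes are fixed-size and comparison is bitwise, this search stays cheap even over large banks of pre-resliced CT views.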
DeepReg: a deep learning toolkit for medical image registration
DeepReg (https://github.com/DeepRegNet/DeepReg) is a community-supported
open-source toolkit for research and education in medical image registration
using deep learning.
Comment: Accepted in The Journal of Open Source Software (JOSS)
Fan-Slicer: A Pycuda Package for Fast Reslicing of Ultrasound Shaped Planes
Fan-Slicer (https://github.com/UCL/fan-slicer) is a Python package that enables the fast sampling (slicing) of 2D ultrasound-shaped images from a 3D volume. To increase sampling speed, CUDA kernel functions are used in conjunction with the Pycuda package. The main features include functions to generate images from both 3D surface models and 3D volumes. Additionally, the package also allows for the sampling of images from curvilinear (fan shaped planes) and linear (rectangle shaped planes) ultrasound transducers. Potential uses of Fan-Slicer include the generation of large datasets of 2D images from 3D volumes and the simulation of intra-operative data, among others.
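As a rough illustration of what fan-shaped reslicing involves (this is NOT the Fan-Slicer API; see the repository for the actual functions and CUDA kernels), a fan of sample points can be swept through a volume like this:

```python
import numpy as np

def fan_sample(volume, origin, angles, radii):
    """Sample a fan of points on a fixed z-slice of a 3D volume, nearest neighbour."""
    a, r = np.meshgrid(angles, radii, indexing="ij")
    xs = origin[0] + r * np.cos(a)
    ys = origin[1] + r * np.sin(a)
    xi = np.clip(np.rint(xs), 0, volume.shape[0] - 1).astype(int)
    yi = np.clip(np.rint(ys), 0, volume.shape[1] - 1).astype(int)
    return volume[xi, yi, origin[2]]

vol = np.arange(64 ** 3, dtype=np.float32).reshape(64, 64, 64)
angles = np.linspace(np.deg2rad(60), np.deg2rad(120), 128)  # a 60-degree fan
radii = np.linspace(5.0, 60.0, 256)                         # sampling depth in voxels
img = fan_sample(vol, origin=(32, 0, 32), angles=angles, radii=radii)
# img is a 2D fan-shaped reslice: one row per beam angle, one column per depth.
```

A GPU implementation such as Fan-Slicer parallelises exactly this per-point lookup (with proper interpolation and arbitrary plane poses) across CUDA threads.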